Pascal has reached critical mass; it has flashed through the
mainframe, mini, and now the micro field. It has much to
support its popularity; however, it represents but one point of view
about computing.
This article offers a contrasting position,
as "personified" by LISP.
The style of this article is "philosophical"
rather than "substantive".
That is, we will not discuss relative expressive power or syntax; rather, we will argue
that the forces and attitudes which shaped the languages, and
the kinds of problems which the languages address, represent very diverse
views of computation. Once this philosophical perspective has been outlined,
we relate these observations to the issues which drive personal computing
--both in terms of applications and languages. Finally,
we discuss the possibilities for reconciliation between LISP-like languages and
Pascal-like languages.
I remember comments about
another "philosophical paper" about computing where one reader said
%3"You don't read this paper, you smoke it!"%1; I hope this paper will generate
more fire than smoke. My "flame" involves an attitude
about how one is to coerce programmers into writing correct programs.
Niklaus Wirth, at the 1969 Software Engineering Conference, said:
.begin indent 6,6,6;
"I would like to discuss
the trend towards conversationality in our tools. There has been, since the
development of timesharing and on-line consoles, a very hectic trend towards
development of systems which allow the interactive development of
programs. Now this is certainly nice in a way, but it has its dangers, and I
am particularly wary of them because this conversational usage has not only
gained acceptance among software engineers but also in universities where students
are trained in programming. My worry is that the facility
of quick response leads to sloppy working habits and, since
the students are going to be our future software engineers, this might be
rather detrimental to the field as a whole".
.end
Wirth was addressing programming development in particular, but the
question of "sloppy work habits" has a broader connotation. Many
industry and educational personnel still question the viability
of interactive composition of any kind of text. Supposedly on-line composition
is wasteful and inefficient; one should carefully think out the text,
writing it out in long-hand or by typewriter, then revise and amend,
cut and paste, until the document is finished or the author is exhausted.
To me, that is inefficiency! Of course, many computer text editors are
not well suited to creative composition, being little more than
teletype editors. Even with many display editors one still has to resort
to hard copy for effective editing. That is the nub of our problem:
most display editors are line-oriented and run on "glass teletypes";
"glass teletypes" were derived from hard copy terminals, which were in turn
derivatives of keypunches. Certainly an old way of thinking about
text production. However, if the display medium is truly utilized, then a
quantum improvement in editing facilities can result: information can be
presented as in a "video-book", complete with rapid random-access paging and
on-screen cutting and pasting.
Given such a tool, creative composition becomes a joy. Freedom of expression
is improved; the ideas flow freely from mind to screen.
A phenomenon of "egoless" composition results; the author
does not have to measure the %3content%1 of a passage against the
%3agony%1 involved in its creation. As a result, the author is more willing
to modify and polish the document, even experimenting with major
revisions in order to present the topic in its best light.
What is truly exciting is that the technology
for such editing systems is now of modest cost; one needs only to break with
the past ways of thinking about editors.
What does this have to do with our subject? There is a strong analogy
between the editing task and the programming task. We must examine the
programming task the same way that we looked at editors.
Alas, most of us still program on "virtual keypunches".
We may even use one of these
excellent display editors to create our "card deck", but
we still create a linear string of characters and throw the completed
program at a compiler; given that there are no syntax errors, we try to
run the program, probably debugging with dumps, print statements, or
debuggers which give us information in terms of the compiled code.
If an error is found, we throw away the world, returning to the editor
to modify the source, recompile and try again.
Given these conditions, sloppy work habits are inevitable.
In fact, these %3conditions%1 are sloppy!
Now we are at a critical point in programming: do we build a better
"programming keypunch" or do we look for a programmer's tool analogous
to the display editor? I would opt for the latter, and would claim that
the paradigm of LISP programming is the appropriate model from which we should
begin.
Before that discussion, we shall suggest several features which one might
advocate for a better "programmer's keypunch", and then examine some of the
historical background on language development which might predispose
one to such a device.
First, let's examine the conservative "keypunch" approach.
The traditional edit-compile-debug cycle is improved by a display editor,
operating with
a compiler which can return the user to the editor upon indication of a
syntax error, pointing at the source statement which caused the error.
A quick edit, and the syntax check cycle begins again. However, once
the program is compiled, we are still at the mercy of primitive
debugging techniques. One solution is to require that the user specify
more information about the program, indicating the expected behavior of program
modules, on the assumption that the compiling phase can be made more
knowledgeable, checking the consistency of the expectations against
the realities present in the user's encoded algorithms. This way
we will have greater assurance that the code which gets to the
debug portion of the cycle does operate as expected. In general, these
user expectations are difficult to express, and checking their
consistency is even more problematic.
Consistency checks based on simple syntactic properties of
the programmer's variables, however, can be performed in a reasonably straightforward
manner. Such properties are called a %2type structure%1. The compiler
checks that each occurrence of an identifier always has the same type;
with simple variables, this involves checking such things as "is it an
integer?", "is it a boolean?"; with procedures the compiler checks that
the programmer's calls are consistent in the number of arguments
supplied and the types of the parameters. To be sure, such consistency
is important; however, it usually requires that the programmer
supply declarations throughout the program segments, specifying the
type-structure information of each and every identifier.
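To make the flavor of such checking concrete, consider a hedged sketch in the
LISP notation used later in this article (the procedure and its calls are
hypothetical). Given a declaration that AREA expects exactly two numbers,
a type-checking compiler rejects both of the calls below before the program
is ever run; an un-typed interactive system discovers them only when the
calls are executed.
.begin nofill;
(DEFINE AREA (LAMBDA (W H) (TIMES W H)))  ; a procedure of two numeric arguments
(AREA 3)                                  ; inconsistent: wrong number of arguments
(AREA 3 'FOUR)                            ; inconsistent: an argument of the wrong type
.end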
This approach offers a short-term
payoff in an increased probability that the generated code will perform
as expected. But then, there's still the strong possibility that the
programmer's errors are not so transparent that type-checking can
detect them. One approach is to improve the consistency phase, strengthening
the compiler's ability to detect expectation-reality inconsistencies.
This requires that programmers express more and more of their expectations
along with their algorithms; a program verification system results.
Such a system will guarantee that the expectations which were expressed
are consistent
with the written algorithm.
The debugging problem goes away, but it is replaced by a specification task
which may be as difficult to accomplish as the debugging.
But what about programs whose "expectations" are not easily formulated?
The primary example is Artificial Intelligence programming, wherein
it is often only the final algorithm itself which expresses the expectation.
The creative programming process is driven by a partially understood
phenomenon; the programming effort is to capture as much of that phenomenon
as possible. One programs in a world of great uncertainty, much like driving a
car. We may have a reasonably detailed roadmap, but the path may involve
traffic lights, accidents, and detours. We do not return home and restart
every time we encounter an unexpected situation; we correct on the fly and
continue. We claim that much of modern computation shares this exploratory
character with AI.
What about programs which involve issues beyond
correctness? For example, it would be most difficult to express the
specifications of a text editor in terms of static conditions. To be sure,
certain aspects of editors involve correctness, but the aesthetics of
usability are equally important. An experimental process is involved,
requiring modification and iteration. In fact, the %3raison d'etre%1
for editors is modification. If we made no typing errors we'd need no editor.
For example, when we discover an error in a text file we don't erase
the whole file and retype it; we edit the error and keep the result.
However, in the traditional debugging paradigm, when we discover a
runtime error we throw away all the computation, edit the file, and
restart. If that computation has taken several hours (or even minutes) to elicit
the bug, it will be most painful to restart.
The real point is that most programming involves a deeper issue than debugging:
modifiability. Programs are always being modified because
we change our expectations.
Debugging is only a very minor component of the problem of program
modification.
Therefore, as we progress to more and more complex programming tasks,
program modification will take on a more fundamental role.
Don't try to stamp out program modification as a manifestation
of human frailty and error; it is a fundamental
ingredient of our field. Cater to modification at the innermost levels
of our programming systems.
Assembly language continues to dominate systems design not because of
innate masochism, but because of modifiability. There is a close (but low-level) match
between the language, the debugger, and the execution device.
A language which expects to dethrone machine language must offer an equally
compelling environment.
One AI researcher is quoted as characterizing
programming as "debugging a blank piece of paper". To that I would add:
%3programming is modifying a blank screen%1.
Get the machine into the programming process as soon as possible,
but it must be done right. In that context we will see a rise in
productivity comparable to that experienced in the editing task when
a true display editor is used. We can expect "egoless programming",
and good work habits to evolve naturally. With such tools the promise of
structured programming can become a reality. That is, it is the %3activity%1
of programming which involves the structuring. One should not expect
to find structure in a program any more than one can look at the final board
positions of a chess game and tell whether that game was played by
masters or amateurs. Imagine programming systems whose "moves"
involve step-wise refinement of partially elaborated programs; imagine
the transcript of those keystrokes as comparable to the recording
of moves in the chess game.
The transcript of the program development would be available
for analysis by programming students and teachers.
Such a system would truly support structured programming!
Current practice does not
support such activity. We are forced to submit character strings to our
language systems, even though our methodology tells us to compose
in terms of structure. Until that disparity is resolved we should expect
little improvement in the "software problem".
Most programming languages do very little to reinforce
the creative process of algorithm discovery and creation. They
are more concerned with the %3execution%1
of already constructed algorithms. This is a natural outgrowth of their
ancestry: a numerical computation era in which the emphasis was placed
on minimizing computer time at the expense of programmer time.
Also the programming problems of that day involved the transcription
of well-specified numerical algorithms into an equally precise programming
language.
Times and economics have changed.
The problems are more complex and no longer as well-specified as
those of numerical analysis.
Now computers are cheap and programmers are expensive;
we need techniques to speed the development of correct programs.
Certainly the verification efforts are aimed in this direction, but verification
typically is an after-the-fact reconciliation of a completed algorithm
with some descriptive specification of its behavior.
We need languages which support the creative and exploratory phases of
program development. Of course, one may question this.
Wirth, in the Computing Surveys,
writes:
.begin indent 6,6,6
..."It is therefore
entirely possible that in the future a more interactive mode of operation
between compiler and programmer will emerge, at least for the very
sophisticated professional. The purpose of this interaction would not,
however, be the development of an algorithm or the debugging of a program, but
rather its %3improvement under invariance of correctness%1". [Wirth's
emphasis]
.end
I most definitely agree with the emphatic phrase; we must develop such
program transformation systems. However, it is equally important to
improve the program development phase.
Exploratory programming, which is the hallmark of AI and which,
to a very large extent, occurs in the creative stages of any programming
task, is best done with an un-typed interactive language like LISP.
Strongly typed languages like Pascal only
confuse and obfuscate the formulative stages.
LISP's basic unit
is an expression, meaning that every LISP construct computes a value.
LISP tends to emphasize the applicative nature of algorithms,
using "function application" as its basic computational notation and
using recursion to express the control aspects of the process;
recent research has indicated that many common recursive schemes
can be %3executed%1 in an iterative fashion. That is, the evaluation
mechanism need not involve the usual stack-oriented overhead. One should
not confuse the recursive notation with the evaluation mechanism.
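A sketch may make the distinction plain. Both definitions below compute the
length of a list; the names and the Scheme-flavored notation are mine (the
Steele and Sussman reports cited in the bibliography develop the point). In
the first, each call must await the value of the inner call, so a stack
builds up; in the second, the recursive call is the final act of the
procedure, and an evaluator may execute it as a loop.
.begin nofill;
(DEFINE LENGTH1                        ; recursive notation, recursive behavior:
  (LAMBDA (L)                          ; each call awaits the value of the
    (IF (NULL L)                       ; inner call, so the stack grows
        0
        (ADD1 (LENGTH1 (CDR L))))))

(DEFINE LENGTH2 (LAMBDA (L) (COUNT L 0)))
(DEFINE COUNT                          ; here the recursive call is the last
  (LAMBDA (L N)                        ; act -- an evaluator can run it as an
    (IF (NULL L)                       ; iteration, with no stack growth
        N
        (COUNT (CDR L) (ADD1 N)))))
.end
Note also that every IF above is an expression returning a value; there is
not a statement in sight.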
The basic unit of Pascal is a statement, rather than an expression.
That is, Pascal's units tend to be executed for %3effect%1 rather than
%3value%1.
It is interesting that John Backus, the "father of Fortran", has
spent considerable time in recent years studying and advocating applicative
languages, turning from the more traditional imperative, statement-oriented
languages like Fortran, Algol, and Pascal.
In his Turing lecture, Backus writes:
.begin indent 6,6,6;
... This world of statements is a disorderly one, with few useful mathematical
properties. Structured programming can be seen as a modest effort to introduce
some order into this chaotic world, but it accomplishes little in attacking
the fundamental problems created by the word-at-a-time von Neumann style
of programming, with its primitive use of loops, subscripts, and branching
flow of control.
.end
Of course things are not all that black-and-white. Pascal has applicative
aspects and LISP has imperative aspects. The difference is again one of emphasis
--of philosophy: the expression versus the statement.
The difference has a mighty influence on the language design; expressions
lead to calculator-like interactions; statements lead to computer-like
programs.
Wirth, in the Computing Surveys, writes:
.begin indent 6,6,6;
... we must recognize the strong and undeniable influence that our language
exerts on our way of thinking, and in fact defines and delimits
the abstract space in which we can formulate--give form to--our thoughts.
.end
That is a critical point, true in natural language as well as in programming
languages. In fact, the problem goes deeper than the choice of programming language.
One's attitude about computation is deeply connected with the human
interface problem. Those who grew up with batch-oriented computation
have a quite different frame of reference from that of a person
who has experienced truly interactive access.
These attitudes permeate the whole computing approach. If one is to
interact with a calculator, then one must expect to present expressions and
receive values; that implies an immediacy which is antithetical to
type structures. So too with interactive design: the interactive creation and
running of partially specified programs is difficult to reconcile with language
processors which expect total information about all program segments and
identifiers.
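The flavor of that immediacy, in a hypothetical console session (expressions
typed at the left margin, values printed beneath):
.begin nofill;
(PLUS 2 3)
5
(REVERSE '(A B C))
(C B A)
.end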
By now many of you are sure that I am preaching total heresy, advocating a
return to the old days of hack-and-patch programming. I am definitely not.
Rather, I am advocating the development of programming systems which
incorporate the ideas of structured stepwise development of programs;
systems in which the interactive %3development%1 is as flexibly supported
as the interactive %3debugging%1 of modern LISP systems. We must do for
programming what the page-oriented display has done for editing.
But such systems address only half the software problem. We must dramatically
improve the way people think about programming. Computer programming is
probably the most difficult and challenging enterprise which mankind has
ever undertaken. Until programming methodologies are supported by the
programming tools and until
programmers are expected to spend as much time
perfecting their craft as professional engineers and poets, we should
expect little improvement in software quality.
We must put a stronger emphasis on the education of programmers, rather
than restrict the expressive powers of the languages. Consider the tools of a
more traditional craftsman: in the hands of an amateur those tools can be
deadly, whereas the craftsman can develop exquisite objects. We do not
propose that all tools be dulled appropriately so that the amateurs cannot
do themselves injury! We educate them in the proper use of, and respect for,
the instruments, and expect that the craftsman's tools remain sharp.
The emphasis is properly placed on the individual.
Do not confuse education with the
mass-merchandising of armies of programmers
whose competence is sealed in a cautious, uncreative world of dull tools.
The AI community
practices the craft of the tool builder; its tools are the sharpest and most
incisive in the computer field. It is this kind of tool which should appeal to
the personal computer user. It is not the elegance of BASIC which has
made it the standard personal computer
language; it is BASIC's interactive nature, allowing
quick experimentation with programming ideas, which accounts for its
longevity. This interactive flavor can be grafted onto a Pascal-like
language, but it is much more difficult to do this successfully. In the process
one either compromises the tenets of the language (through extensions) or
compromises the resulting system.
The essence of personalized computing
has been a sort of "wild west" attitude: open, undisciplined, but
creative and lusty as hell. You are a very healthy sign; today, a modest personal
computer has more power and flexibility than that available at a professional
installation twenty years ago. Each of you has molded your system
according to your desires and economic constraints. But freedom is expensive;
systems and programs become one-of-a-kind. Enter compatibility
and, unless you are careful, exit individuality.
Clearly, these problems are not solely the province of personal computing;
the manufacturers feel the same pressures.
So the real question is: can we bring discipline and order to
programming without curtailing the creativity?
It is important that the personal computer
users understand that they need not give up the interactiveness of
BASIC⊗↓BASIC's longevity, indeed strength, lies in its "friendliness".
That is a critical ingredient of an interactive programming language.
I would guarantee that if BASIC were only available in the traditional
batch-oriented environment, its popularity as a personal computer
language would never have developed. Similarly with LISP, or Pascal.
That is, it is the total environment in which a language is
situated which is important. The UCSD experience with Pascal illustrates
this point well. The real question then is: can we do better?←
to gain the structure and portability which Pascal advertises; LISP
offers both. LISP is %3not%1 a special-purpose list-processing language;
a modern LISP system has more flexible data-handling facilities than
many more recent languages. For example, MACLISP's data types include
arbitrary precision numbers, very flexible record structures (called property
lists), strings, arrays, list structure, and even procedures. One attribute which
leads to LISP's elegance and economy of expression is that all of these
data types are available as values of programming constructs. Thus
LISP procedures may take procedures as values; may return procedures as
values; or may create arrays which are returned as values.
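A brief Scheme-flavored sketch of the last point (COMPOSE is a hypothetical
name, not a system function): a procedure which takes two procedures as
arguments and returns a new procedure as its value.
.begin nofill;
(DEFINE COMPOSE                     ; takes procedures F and G as arguments ...
  (LAMBDA (F G)
    (LAMBDA (X) (F (G X)))))        ; ... and returns a new procedure as its value

((COMPOSE CAR CDR) '(A B C))        ; applying the manufactured procedure: value is B
.end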
Lest one suspect that such generality implies inefficiency, we should note that
LISP has been used as the systems programming language for the MIT LISP machine;
all system software is written in LISP, down to the level of
the algorithms which place the characters on the screen; even the micro-code is
written in LISP.
LISP compilers can be as good as those of any other language;
in one experiment, the MACLISP compiler was tested against a Fortran compiler
on some purely numerical examples. The LISP compiler's code was better in
terms of both space and time!
Lest one charge that such generality leads to sloppy programming techniques
we again note that a language is a tool, only as effective as its user.
I would rather sharpen the user than dull the tool. In terms of user aids,
LISP excels; a modern LISP system is an integrated programming environment
incorporating editors, debuggers, and compilers. Some LISPs include
sophisticated error recovery modules, allowing the user to
"undo" computations or explore alternative computations. The integration of
these "programmer's assistants" with modern display techniques has begun.
The results are quite impressive. The flexibility of these systems is
due in large part to LISP's unique representation of programs as data
structures.
As a result, it is easy to write programs which manipulate
programs. Note that language editors, debuggers, compilers, and program
transformation systems like those advocated by Wirth (above) all
fall into this category.
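For instance, here is a toy transformer of the sort Wirth describes -- an
"improver under invariance of correctness" -- in a few lines of
Scheme-flavored LISP. SIMPLIFY is a hypothetical illustration, not a system
routine; because the program being improved is just a list, the transformer
needs only CAR, CDR, and friends.
.begin nofill;
(DEFINE SIMPLIFY
  (LAMBDA (E)
    (COND ((ATOM E) E)                         ; constants and variables pass through
          ((AND (EQ (CAR E) 'PLUS)
                (EQUAL (CADDR E) 0))           ; rewrite (PLUS e 0) as e
           (SIMPLIFY (CADR E)))
          (T (MAPCAR SIMPLIFY E)))))           ; otherwise descend into subexpressions

(SIMPLIFY '(TIMES X (PLUS Y 0)))               ; value: (TIMES X Y)
.end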
One also hears dreadful rumors about LISP's syntax (%2L%1ots of %2I%1rritating
%2S%1ingle %2P%1arentheses). First, the regularity of the notation
and the simple syntax more than make up for any initial inconvenience.
However, these syntactic difficulties can also be stifled directly. It
is quite simple to supply LISP with an ALGOL-like sugared input and output.
Such parsers and unparsers are simple LISP programs.
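As a hint of how simple: the toy unparser below (hypothetical, and ignoring
operator precedence) turns prefix lists into infix lists; a real front end
would also rename PLUS to +, manage precedence, and read the sugared form
back in.
.begin nofill;
(DEFINE UNPARSE
  (LAMBDA (E)
    (IF (ATOM E)
        E                                ; names and numbers print as themselves
        (LIST (UNPARSE (CADR E))         ; binary prefix form (OP A B)
              (CAR E)                    ; becomes infix form (A OP B)
              (UNPARSE (CADDR E))))))

(UNPARSE '(PLUS X (TIMES Y Z)))          ; value: (X PLUS (Y TIMES Z))
.end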
The above discussion has hinted at a very important aspect of LISP:
LISP %3is%1 a machine language. In the traditional
machine, instructions reside in memory locations just as data does.
It is the access path of the CPU which determines how the contents of a location
are to be interpreted. Access by the program counter implies an instruction
fetch; other access implies a data fetch. So too in LISP; data and program
are stored identically; both are presented to the machine in a simple
syntax of lists.
The LISP CPU, called %3eval%1, accesses LISP memory
to fetch either code or data. Instead of the linear sequential representation of
traditional machines, LISP has a tree-like storage scheme. One very exciting
area of investigation involves architectures for LISP-like machines.
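The point in miniature, as a hypothetical session (EXP is a name of my own
choosing): the same list is data when reached by CAR, and program when
handed to %3eval%1.
.begin nofill;
(DEFINE EXP (LIST 'TIMES 2 (LIST 'PLUS 3 4)))  ; a program, built as ordinary data
(CAR EXP)                                      ; accessed as data: value is TIMES
(EVAL EXP)                                     ; accessed as program: value is 14
.end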
Finally, it is often assumed that LISP demands large expensive computers
for its implementation. That too is not true; there are
versions of LISP for the Z-80, 8080, 6800, F8,
LSI-11, and even a version in BASIC. Certainly the machine size will limit
the range of feasible applications; but that is true of any language.
The new class of microcomputers is
particularly exciting; these machines will open up many new
areas --including AI-- for the personal machine.
The new machines
will support very substantial implementations of LISP.
It is particularly important to influence the personal computer
advocate now, given the growth in computing power and the cries
for "compatibility", for "discipline", and for "standardization".
The DOD-1 language effort is but the latest manifestation of this
attitude.
I do not believe that
this "legislative" approach is healthy. I believe that LISP offers
a healthy alternative to the current choices of programming languages
for personal computation.
.next page
BIBLIOGRAPHY
ACM Computing Surveys, Vol. 6, No. 4, Dec. 1974:
  N. Wirth, "On the Composition of Well-Structured Programs", pp. 247-260
  D. Knuth, "Structured Programming with go to Statements", pp. 261-302
K. Jensen and N. Wirth, Pascal User Manual and Report, Springer-Verlag, 1974
J. Backus, Turing Lecture: "Can Programming Be Liberated from the von Neumann
  Style? A Functional Style and Its Algebra of Programs", CACM, Vol. 21, No. 8, pp. 613-641
J. Allen, Anatomy of LISP, McGraw-Hill, 1978
G. Steele and G. Sussman, The Revised Report on Scheme, MIT AI Memo 452,
  Cambridge, Jan. 1978
G. Steele and G. Sussman, The Art of the Interpreter or,
  the Modularity Complex, MIT AI Memo 453, Cambridge, May 1978
W. Teitelman, A Display Oriented Programmer's Assistant, Xerox Palo Alto
  Research Center, Report CSL 77-3
E. Sandewall, "Programming in an Interactive Environment: The `LISP' Experience",
  ACM Computing Surveys, Vol. 10, No. 1, March 1978, pp. 33-71